1.
Brain Behav Immun; 114: 475-487, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37543247

ABSTRACT

The field of psychoneuroimmunology (PNI) has grown substantially in both relevance and prominence over the past 40 years. Notwithstanding its impressive trajectory, a majority of PNI studies are still based on a relatively small number of analytes. To advance this work, we suggest that PNI, and health research in general, can benefit greatly from adopting a multi-omics approach, which involves integrating data across multiple biological levels (e.g., the genome, proteome, transcriptome, metabolome, lipidome, and microbiome/metagenome) to more comprehensively profile biological functions and relate these profiles to clinical and behavioral outcomes. To assist investigators in this endeavor, we provide an overview of multi-omics research, highlight recent landmark multi-omics studies investigating human health and disease risk, and discuss how multi-omics can be applied to better elucidate links between psychological, nervous system, and immune system activity. In doing so, we describe how to design high-quality multi-omics studies, decide which biological samples (e.g., blood, stool, urine, saliva, solid tissue) are most relevant, incorporate behavioral and wearable sensing data into multi-omics research, and understand key data quality, integration, analysis, and interpretation issues. PNI researchers are addressing some of the most interesting and important questions at the intersection of psychology, neuroscience, and immunology. Applying a multi-omics approach to this work will greatly expand the horizon of what is possible in PNI and has the potential to revolutionize our understanding of mind-body medicine.


Subject(s)
Multiomics, Psychoneuroimmunology, Humans, Metabolome, Immune System, Proteome
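
The integration step this abstract describes — joining features from several omics layers on a shared subject identifier, then relating the merged profile to a behavioral outcome — can be illustrated with a short sketch. The code below is a minimal, hypothetical example, not taken from the article: the layer names and sizes, the z-scoring choice, and the symptom_score outcome are all assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
subjects = [f"S{i:03d}" for i in range(60)]

def synth_layer(name: str, n_features: int) -> pd.DataFrame:
    """Stand-in for one omics table (rows = subjects, columns = analytes)."""
    data = rng.normal(size=(len(subjects), n_features))
    cols = [f"{name}__f{j}" for j in range(n_features)]
    return pd.DataFrame(data, index=pd.Index(subjects, name="subject_id"),
                        columns=cols)

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize each feature so layers on different scales are comparable."""
    return (df - df.mean()) / df.std(ddof=0)

# In a real study each layer would be loaded from assay output; here the
# layers are synthetic and purely illustrative.
layers = {
    "transcriptome": synth_layer("transcriptome", 200),
    "proteome": synth_layer("proteome", 80),
    "metabolome": synth_layer("metabolome", 50),
}

# Inner-join on subject_id keeps only subjects observed on every layer.
merged = pd.concat([zscore(df) for df in layers.values()], axis=1, join="inner")

# Hypothetical behavioral outcome; a PNI study might use a symptom scale here.
outcome = pd.Series(rng.normal(size=len(subjects)), index=merged.index,
                    name="symptom_score")

# Simple single-analyte screen: correlate each feature with the outcome.
corrs = merged.corrwith(outcome)
print(corrs.abs().sort_values(ascending=False).head(10))
```

An inner join keeps only subjects measured on every layer; cohorts with partial overlap between assays would instead need imputation or integration methods that tolerate missing blocks.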
2.
Front Digit Health; 4: 943768, 2022.
Article in English | MEDLINE | ID: mdl-36339512

ABSTRACT

Multiple reporting guidelines for artificial intelligence (AI) models in healthcare recommend that models be audited for reliability and fairness. However, there is a gap in operational guidance for performing reliability and fairness audits in practice. Following guideline recommendations, we conducted a reliability audit of two models based on model performance and calibration, as well as a fairness audit based on summary statistics, subgroup performance, and subgroup calibration. We assessed the Epic End-of-Life (EOL) Index model and an internally developed Stanford Hospital Medicine (HM) Advance Care Planning (ACP) model in three practice settings: Primary Care, Inpatient Oncology, and Hospital Medicine, using clinicians' answers to the surprise question ("Would you be surprised if [patient X] passed away in [Y years]?") as a surrogate outcome. For performance, the models had a positive predictive value (PPV) at or above 0.76 in all settings. In Hospital Medicine and Inpatient Oncology, the Stanford HM ACP model had higher sensitivity (0.69 and 0.89, respectively) than the EOL model (0.20 and 0.27) and better calibration (O/E 1.5 and 1.7) than the EOL model (O/E 2.5 and 3.0). The Epic EOL model flagged fewer patients (11% and 21%, respectively) than the Stanford HM ACP model (38% and 75%). There were no differences in performance or calibration by sex. Both models had lower sensitivity in Hispanic/Latino male patients with Race listed as "Other." Ten clinicians were surveyed after a presentation summarizing the audit. 10/10 reported that summary statistics, overall performance, and subgroup performance would affect their decision to use the model to guide care; 9/10 said the same for overall and subgroup calibration. The most commonly identified barriers to routinely conducting such reliability and fairness audits were poor demographic data quality and lack of data access. This audit required 115 person-hours across 8-10 months. Our recommendations for performing reliability and fairness audits include verifying data validity, analyzing model performance on intersectional subgroups, and collecting the clinician-patient linkages necessary for label generation by clinicians. Those responsible for AI models should require such audits before model deployment and mediate between model auditors and impacted stakeholders.
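
The reliability metrics named in this abstract (PPV, sensitivity, and observed/expected calibration) and the subgroup breakdown are straightforward to compute once model predictions and surrogate labels are linked. The sketch below is an illustrative reconstruction, not the authors' pipeline; the column names risk, label, sex, and race_ethnicity, and the 0.5 flagging threshold, are assumptions.

```python
import numpy as np
import pandas as pd

def audit_metrics(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """PPV, sensitivity, flag rate, and observed/expected (O/E) ratio.

    Expects columns: 'risk' (model probability) and 'label'
    (1 = clinician would not be surprised by death, per the surprise question).
    """
    flagged = df["risk"] >= threshold
    positives = int((df["label"] == 1).sum())
    tp = int((flagged & (df["label"] == 1)).sum())
    ppv = tp / flagged.sum() if flagged.sum() else np.nan
    sens = tp / positives if positives else np.nan
    # O/E > 1 means more events were observed than the model expected,
    # i.e., the model under-predicts risk for this group.
    oe = df["label"].sum() / df["risk"].sum()
    return pd.Series({"ppv": ppv, "sensitivity": sens,
                      "o_e": oe, "flag_rate": flagged.mean()})

def subgroup_audit(df: pd.DataFrame,
                   groups=("sex", "race_ethnicity"),
                   threshold: float = 0.5) -> pd.DataFrame:
    """Repeat the audit within each intersectional subgroup."""
    return df.groupby(list(groups)).apply(audit_metrics, threshold=threshold)

if __name__ == "__main__":
    # Synthetic demonstration data; a real audit would use linked
    # predictions and clinician-generated labels.
    rng = np.random.default_rng(0)
    n = 1000
    demo = pd.DataFrame({
        "risk": rng.uniform(size=n),
        "sex": rng.choice(["F", "M"], size=n),
        "race_ethnicity": rng.choice(["Hispanic/Latino", "Other"], size=n),
    })
    demo["label"] = (rng.uniform(size=n) < demo["risk"]).astype(int)
    print(audit_metrics(demo))
    print(subgroup_audit(demo))
```

Grouping on the intersection of sex and race/ethnicity, rather than on each attribute alone, is what surfaces findings like the lower sensitivity for Hispanic/Latino male patients reported above.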
